One of the most common pitfalls found in high-dimensional biological datasets is correlation between features. This can lead statistical and machine learning methods to over- or under-estimate the importance of these correlated predictors, while the truly relevant ones are overlooked. In this paper, we define a new method named the Pairwise Permutation Algorithm (PPA) whose aim is to mitigate correlation bias in feature importance values. First, we provide a theoretical foundation that builds on previous work on permutation importance. We then apply the PPA to a toy dataset, where we demonstrate its ability to correct for correlation effects. We further test the PPA on a microbiome shotgun dataset, showing that the PPA is able to recover biologically relevant biomarkers.
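The abstract does not spell out the PPA itself, so the sketch below only illustrates the classical permutation importance it builds on, with the pairwise handling of correlated features indicated in a comment; the function name, toy data, and model are assumptions for illustration.

```python
# Sketch of classical permutation importance, the quantity the PPA builds on.
# The pairwise extension for correlated features is only hinted at in a comment,
# since the abstract does not describe the exact procedure.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    rng = np.random.default_rng(seed)
    baseline = accuracy_score(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])   # break the feature/target link
            drops.append(baseline - accuracy_score(y, model.predict(X_perm)))
        importances[j] = np.mean(drops)                    # mean performance drop
    # A pairwise variant could additionally permute each feature jointly with its
    # most correlated partner and redistribute the shared drop between the two.
    return importances

# toy data with two strongly correlated predictors
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=300)
y = (X[:, 0] > 0).astype(int)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(permutation_importance(model, X, y))
```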
Catastrophic forgetting is a significant problem hindering the deployment of deep learning algorithms in continual learning settings. Many approaches have been proposed to address the catastrophic forgetting problem, in which an agent loses its generalization power on old tasks while learning new ones. We propose an alternative strategy that handles catastrophic forgetting via knowledge amalgamation (CFA), which learns a student network from multiple heterogeneous teacher models specializing in previous tasks and can be applied on top of current offline methods. The knowledge amalgamation process is carried out in a single-head manner with only a selected number of memorized samples and no annotations. The teachers and the student do not need to share the same network structure, allowing heterogeneous tasks to be adapted to a compact or sparse data representation. We compare our method against competitive baselines from different strategies, demonstrating the advantages of our approach.
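As a loose illustration of a single-head, annotation-free amalgamation step of the kind described, here is a hedged PyTorch sketch; combining teacher logits by concatenation over disjoint class sets, and all layer sizes, are assumptions, not CFA's actual objective.

```python
# Hedged sketch: distilling one student from several frozen, task-specific
# teachers using only unlabeled memory samples.
import torch
import torch.nn.functional as F

def amalgamation_step(student, teachers, memory_batch, optimizer, T=2.0):
    """One single-head distillation step on an unlabeled memory batch."""
    with torch.no_grad():
        # Assume each teacher covers a disjoint set of old classes; concatenate
        # their logits into one soft target over all classes seen so far.
        target_logits = torch.cat([t(memory_batch) for t in teachers], dim=1)
        soft_targets = F.softmax(target_logits / T, dim=1)
    student_logits = student(memory_batch)            # single head over all classes
    loss = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    soft_targets, reduction="batchmean") * T * T
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# toy usage with linear "networks" of different sizes (teachers may be heterogeneous)
teachers = [torch.nn.Linear(16, 3), torch.nn.Linear(16, 2)]
student = torch.nn.Linear(16, 5)
opt = torch.optim.SGD(student.parameters(), lr=0.1)
amalgamation_step(student, teachers, torch.randn(8, 16), opt)
```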
Graph neural networks (GNNs) have become the backbone for a myriad of tasks on graphs and similar topological data structures. Although many works have been established in domains related to node and graph classification/regression tasks, they mostly deal with a single task. Continual learning on graphs is largely unexplored, and existing graph continual learning approaches are limited to the task-incremental learning scenario. This paper proposes a continual learning strategy that combines architecture-based and memory-based approaches. The structural learning strategy is driven by reinforcement learning, where a controller network is trained to determine the optimal nodes to be added to or pruned from the base network when a new task is observed, thus ensuring sufficient network capacity. The parameter learning strategy is underpinned by the concept of Dark Experience Replay to cope with the catastrophic forgetting problem. Our approach is numerically validated on several graph continual learning benchmark problems in both task-incremental and class-incremental learning settings. Compared with recently published works, our approach demonstrates improved performance in both settings. The implementation code can be found at \url{https://github.com/codexhammer/gcl}.
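The parameter-learning side builds on Dark Experience Replay; the sketch below shows that replay idea in isolation (store past inputs together with the logits they produced, and pull current logits toward them), with an assumed buffer size and loss weight, and omits the RL-driven controller that adds or prunes nodes.

```python
# Hedged sketch of the Dark Experience Replay (DER) ingredient only.
import random
import torch
import torch.nn.functional as F

class DERBuffer:
    def __init__(self, capacity=200):
        self.capacity, self.data = capacity, []

    def add(self, x, logits):
        for xi, li in zip(x, logits.detach()):
            if len(self.data) < self.capacity:
                self.data.append((xi, li))
            else:                                   # reservoir-style replacement
                self.data[random.randrange(self.capacity)] = (xi, li)

    def sample(self, k):
        batch = random.sample(self.data, min(k, len(self.data)))
        xs, ls = zip(*batch)
        return torch.stack(xs), torch.stack(ls)

def der_step(model, buffer, x, y, optimizer, alpha=0.5):
    logits = model(x)
    loss = F.cross_entropy(logits, y)               # current-task loss
    if buffer.data:
        xb, lb = buffer.sample(len(x))
        loss = loss + alpha * F.mse_loss(model(xb), lb)   # logit-matching replay
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    buffer.add(x, logits)
    return loss.item()

# toy usage with a linear stand-in for the (graph) network
model = torch.nn.Linear(12, 4)
opt = torch.optim.SGD(model.parameters(), lr=0.05)
buf = DERBuffer()
der_step(model, buf, torch.randn(8, 12), torch.randint(0, 4, (8,)), opt)
```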
Cross-domain multistream classification is a challenging problem calling for fast domain adaptation to handle different but related streams in never-ending and rapidly changing environments. Although existing multistream classifiers assume no labeled samples in the target stream, they still incur expensive labeling costs because they require fully labeled samples of the source stream. This paper aims to attack the problem of extreme label scarcity in cross-domain multistream classification, where only very few labeled source-stream samples are provided before the process runs. Our solution, namely Learning Streaming Process from Partial Ground Truth (LEOPARD), is built upon a flexible deep clustering network whose hidden nodes, layers, and clusters are added and removed dynamically in response to varying data distributions. Simultaneous feature learning and clustering techniques underpin clustering-friendly latent spaces. The domain adaptation strategy relies on the adversarial domain adaptation technique, in which a feature extractor is trained to fool a domain classifier that distinguishes source and target streams. Our numerical study demonstrates the efficacy of LEOPARD, which delivers improved performance compared to prominent algorithms across 24 cases. The source code of LEOPARD is shared at \url{https://github.com/wengweng001/leopard.git}.
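The adversarial ingredient can be pictured with a standard gradient-reversal setup, sketched below under assumed network shapes; this is not a reproduction of LEOPARD's evolving deep clustering network.

```python
# Hedged sketch: a gradient-reversal layer lets the feature extractor be
# trained to fool a domain classifier separating source from target streams.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None          # flip the gradient sign

def adversarial_step(feat, dom_clf, x_src, x_tgt, optimizer, lam=1.0):
    z = feat(torch.cat([x_src, x_tgt]))
    d_pred = dom_clf(GradReverse.apply(z, lam)).squeeze(1)
    d_true = torch.cat([torch.zeros(len(x_src)), torch.ones(len(x_tgt))])
    loss = nn.functional.binary_cross_entropy_with_logits(d_pred, d_true)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# toy usage with simple MLP heads
feat = nn.Sequential(nn.Linear(10, 8), nn.ReLU())
dom_clf = nn.Linear(8, 1)
opt = torch.optim.Adam(list(feat.parameters()) + list(dom_clf.parameters()), lr=1e-3)
adversarial_step(feat, dom_clf, torch.randn(16, 10), torch.randn(16, 10), opt)
```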
Advances in computer vision and machine learning techniques have led to significant development in 2D and 3D human pose estimation from RGB cameras, LiDAR, and radars. However, human pose estimation from images is adversely affected by occlusion and lighting, which are common in many scenarios of interest. Radar and LiDAR technologies, on the other hand, need specialized hardware that is expensive and power-intensive. Furthermore, placing these sensors in non-public areas raises significant privacy concerns. To address these limitations, recent research has explored the use of WiFi antennas (1D sensors) for body segmentation and key-point body detection. This paper further expands on the use of the WiFi signal in combination with deep learning architectures, commonly used in computer vision, to estimate dense human pose correspondence. We developed a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions. The results of the study reveal that our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches, by utilizing WiFi signals as the only input. This paves the way for low-cost, broadly accessible, and privacy-preserving algorithms for human sensing.
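As a rough, assumed-shape illustration of the mapping described (CSI amplitude and phase in, DensePose-style part and UV outputs out), here is a toy PyTorch module; none of the layer sizes or tensor shapes come from the paper.

```python
# Loose sketch, assuming a CSI tensor of shape (batch, 2, antennas, subcarriers)
# holding amplitude and phase. Output: 25-way part logits (24 regions + background)
# plus per-region UV coordinate maps. All sizes are illustrative placeholders.
import torch
import torch.nn as nn

class WiFiDensePoseSketch(nn.Module):
    def __init__(self, antennas=3, subcarriers=30, out_hw=(56, 56)):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(2 * antennas * subcarriers, 512), nn.ReLU(),
            nn.Linear(512, 64 * 7 * 7), nn.ReLU(),
        )
        self.decoder = nn.Sequential(                 # upsample 7x7 -> 56x56
            nn.Unflatten(1, (64, 7, 7)),
            nn.Upsample(size=out_hw, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 25 + 2 * 24, kernel_size=3, padding=1),
        )

    def forward(self, csi):
        out = self.decoder(self.encoder(csi))
        parts, uv = out[:, :25], out[:, 25:]          # part logits and UV maps
        return parts, uv.view(-1, 24, 2, *uv.shape[-2:])

parts, uv = WiFiDensePoseSketch()(torch.randn(4, 2, 3, 30))
print(parts.shape, uv.shape)    # (4, 25, 56, 56) and (4, 24, 2, 56, 56)
```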
Due to the environmental impacts caused by the construction industry, repurposing existing buildings and making them more energy-efficient has become a high-priority issue. However, a legitimate concern of land developers is associated with the buildings' state of conservation. For that reason, infrared thermography has been used as a powerful tool to characterize these buildings' state of conservation by detecting pathologies such as cracks and humidity. Thermal cameras detect the radiation emitted by any material and translate it into temperature-color-coded images. Abnormal temperature changes may indicate the presence of pathologies; however, reading thermal images is not always straightforward. This research project aims to combine infrared thermography and machine learning (ML) to help stakeholders determine the viability of reusing existing buildings by identifying their pathologies and defects more efficiently and accurately. In this phase of the research project, we used a Deep Convolutional Neural Network (DCNN) image classification model to differentiate three levels of cracks in one particular building. The model's accuracy was compared across MSX and thermal images acquired from two distinct thermal cameras, as well as fused images (formed through multisource information), to test the influence of the input data and network on the detection results.
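A minimal transfer-learning sketch of the kind of three-level crack classifier described, with an assumed ResNet-18 backbone and input resolution; the project's actual network and training setup are not given in the abstract.

```python
# Illustrative only: a small classifier with three crack-severity classes
# applied to color-coded thermal/MSX images.
import torch
import torch.nn as nn
from torchvision import models

def build_crack_classifier(num_levels=3):
    backbone = models.resnet18(weights=None)          # pretrained weights optional
    backbone.fc = nn.Linear(backbone.fc.in_features, num_levels)
    return backbone

model = build_crack_classifier()
logits = model(torch.randn(2, 3, 224, 224))           # batch of 2 color-coded images
print(logits.shape)                                   # torch.Size([2, 3])
```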
The advances in Artificial Intelligence are creating new opportunities to improve the lives of people around the world, from business to healthcare, from lifestyle to education. For example, some systems profile users using their demographic and behavioral characteristics to make certain domain-specific predictions. Often, such predictions impact the life of the user directly or indirectly (e.g., loan disbursement, determining insurance coverage, shortlisting applications, etc.). As a result, concerns over such AI-enabled systems are also increasing. To address these concerns, such systems are mandated to be responsible, i.e., transparent, fair, and explainable to developers and end-users. In this paper, we present ComplAI, a unique framework to enable, observe, analyze, and quantify explainability, robustness, performance, fairness, and model behavior in drift scenarios, and to provide a single Trust Factor that evaluates different supervised Machine Learning models not just on their ability to make correct predictions but from an overall responsibility perspective. The framework helps users to (a) connect their models and enable explanations, (b) assess and visualize different aspects of the model, such as robustness, drift susceptibility, and fairness, and (c) compare different models (from different model families or obtained through different hyperparameter settings) from an overall perspective, thereby facilitating actionable recourse for improvement of the models. It is model agnostic and works with different supervised machine learning scenarios (i.e., Binary Classification, Multi-class Classification, and Regression) and frameworks. It can be seamlessly integrated with any ML life-cycle framework. Thus, this already deployed framework aims to unify critical aspects of Responsible AI systems for regulating the development process of such real systems.
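The abstract does not state how the single Trust Factor is computed; as a purely hypothetical illustration, one simple possibility is a weighted average of normalized per-dimension scores.

```python
# Hypothetical aggregation sketch; not ComplAI's actual Trust Factor formula.
def trust_factor(scores, weights=None):
    """scores: dict of dimension -> value in [0, 1], higher is better."""
    weights = weights or {k: 1.0 for k in scores}
    total = sum(weights[k] for k in scores)
    return sum(weights[k] * v for k, v in scores.items()) / total

print(trust_factor({"explainability": 0.8, "robustness": 0.7,
                    "performance": 0.9, "fairness": 0.6, "drift": 0.75}))
```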
Model calibration, which is concerned with how frequently the model predicts correctly, not only plays a vital part in statistical model design, but also has substantial practical applications, such as optimal decision-making in the real world. However, it has been discovered that modern deep neural networks are generally poorly calibrated due to the overestimation (or underestimation) of predictive confidence, which is closely related to overfitting. In this paper, we propose Annealing Double-Head, a simple-to-implement but highly effective architecture for calibrating the DNN during training. To be precise, we construct an additional calibration head (a shallow neural network that typically has one latent layer) on top of the last latent layer in the normal model to map the logits to the aligned confidence. Furthermore, a simple annealing technique that dynamically scales the calibration head's logits during training is developed to improve its performance. Under both in-distribution and distributional-shift circumstances, we exhaustively evaluate our Annealing Double-Head architecture on multiple pairs of contemporary DNN architectures and vision and speech datasets. We demonstrate that our method achieves state-of-the-art model calibration performance without post-processing, while simultaneously providing predictive accuracy comparable to other recently proposed calibration methods on a range of learning tasks.
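A hedged sketch of the double-head layout as described (a shallow calibration head on top of the backbone, with an annealed scaling of the logits during training); the head design, the point where scaling is applied, and the schedule are assumptions.

```python
# Illustrative double-head module; not the paper's exact architecture.
import torch
import torch.nn as nn

class DoubleHead(nn.Module):
    def __init__(self, backbone, feat_dim, num_classes, hidden=64):
        super().__init__()
        self.backbone = backbone                       # produces latent features
        self.cls_head = nn.Linear(feat_dim, num_classes)
        self.cal_head = nn.Sequential(                 # shallow: one latent layer
            nn.Linear(num_classes, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x, anneal_scale=1.0):
        logits = self.cls_head(self.backbone(x))
        calibrated = self.cal_head(logits * anneal_scale)   # annealed scaling
        return logits, calibrated

# toy usage: the scale could be decayed over epochs, e.g. 2.0 -> 1.0
model = DoubleHead(nn.Sequential(nn.Linear(10, 32), nn.ReLU()), 32, 5)
logits, calibrated = model(torch.randn(4, 10), anneal_scale=1.5)
```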
We study the ability of foundation models to learn representations for classification that are transferable to new, unseen classes. Recent results in the literature show that representations learned by a single classifier over many classes are competitive on few-shot learning problems with representations learned by special-purpose algorithms designed for such problems. We offer an explanation for this phenomenon based on the concept of class-features variability collapse, which refers to the training dynamics of deep classification networks where the feature embeddings of samples belonging to the same class tend to concentrate around their class means. More specifically, we examine the few-shot error of the learned feature map, which is the classification error of the nearest class-center classifier using centers learned from a small number of random samples from each class. Assuming that the classes appearing in the data are selected independently from a distribution, we show that the few-shot error generalizes from the training data to unseen test data, and we provide an upper bound on the expected few-shot error for new classes (selected from the same distribution) using the average few-shot error for the source classes. Additionally, we show that the few-shot error on the training data can be upper bounded using the degree of class-features variability collapse. This suggests that foundation models can provide feature maps that are transferable to new downstream tasks even with limited data available.
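The central quantity, the few-shot error of a nearest class-center classifier whose centers are estimated from a handful of random samples per class, can be computed as in the sketch below; the feature map and data are synthetic placeholders.

```python
# Sketch: few-shot error of the nearest class-center classifier.
import numpy as np

def nearest_center_error(feats, labels, shots=5, seed=0):
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    centers = []
    for c in classes:
        idx = rng.choice(np.where(labels == c)[0], size=shots, replace=False)
        centers.append(feats[idx].mean(axis=0))        # class mean from few shots
    centers = np.stack(centers)
    dists = np.linalg.norm(feats[:, None, :] - centers[None], axis=2)
    pred = classes[dists.argmin(axis=1)]
    return np.mean(pred != labels)

# synthetic "collapsed" features: samples concentrate around their class means
rng = np.random.default_rng(1)
labels = rng.integers(0, 4, size=400)
feats = rng.normal(size=(400, 16)) + 3.0 * np.eye(16)[labels * 4]
print(nearest_center_error(feats, labels))
```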
Dataset scaling, also known as normalization, is an essential preprocessing step in a machine learning pipeline. It aims to adjust attribute scales so that they all vary within the same range. This transformation is known to improve the performance of classification models, but there are several scaling techniques to choose from, and the choice is generally not made carefully. In this paper, we execute a broad experiment comparing the impact of 5 scaling techniques on the performance of 20 classification algorithms, among monolithic and ensemble models, applying them to 82 publicly available datasets with varying imbalance ratios. Results show that the choice of scaling technique matters for classification performance, and the performance difference between the best and the worst scaling technique is relevant and statistically significant in most cases. They also indicate that choosing an inadequate technique can be more detrimental to classification performance than not scaling the data at all. We also show how the performance variation of an ensemble model, considering different scaling techniques, tends to be dictated by that of its base model. Finally, we discuss the relationship between a model's sensitivity to the choice of scaling technique and its performance, and provide insights into its applicability in different model deployment scenarios. Full results and source code for the experiments in this paper are available in a GitHub repository.\footnote{https://github.com/amorimlb/scaling\_matters}
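A small-scale sketch of the kind of comparison the paper runs, evaluating one classifier under several common scikit-learn scalers on a single public dataset; the paper's full grid of 5 scalers, 20 models, and 82 datasets is not reproduced here, and the specific scalers shown are assumptions.

```python
# Compare cross-validated accuracy of one classifier under different scalers.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import (MaxAbsScaler, MinMaxScaler, QuantileTransformer,
                                   RobustScaler, StandardScaler)

X, y = load_breast_cancer(return_X_y=True)
scalers = {"none": None, "standard": StandardScaler(), "minmax": MinMaxScaler(),
           "maxabs": MaxAbsScaler(), "robust": RobustScaler(),
           "quantile": QuantileTransformer(n_quantiles=100)}
for name, scaler in scalers.items():
    steps = [scaler] if scaler is not None else []
    pipe = make_pipeline(*steps, KNeighborsClassifier())
    print(name, cross_val_score(pipe, X, y, cv=5).mean().round(3))
```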